16 research outputs found

    Perfect Is the Enemy of Good: Best-Effort Program Synthesis (Artifact)

    Get PDF

    Perfect Is the Enemy of Good: Best-Effort Program Synthesis

    Get PDF
    Program synthesis promises to help software developers with everyday tasks by generating code snippets automatically from input-output examples and other high-level specifications. The conventional wisdom is that a synthesizer must always satisfy the specification exactly. We conjecture that this all-or-nothing paradigm stands in the way of adopting program synthesis as a developer tool: in practice, the user-written specification often contains errors or is simply too hard for the synthesizer to solve within a reasonable time; in these cases, the user is left with a single over-fitted result or, more often than not, no result at all. In this paper we propose a new program synthesis paradigm we call best-effort program synthesis, where the synthesizer returns a ranked list of partially-valid results, i.e., programs that satisfy some part of the specification. To support this paradigm, we develop best-effort enumeration, a new synthesis algorithm that extends a popular program enumeration technique with the ability to accumulate and return multiple partially-valid results with minimal overhead. We implement this algorithm in a tool called BESTER, and evaluate it on 79 synthesis benchmarks from the literature. Contrary to the conventional wisdom, our evaluation shows that BESTER returns useful results even when the specification is flawed or too hard: i) for all benchmarks with an error in the specification, the top three BESTER results contain the correct solution, and ii) for most hard benchmarks, the top three results contain non-trivial fragments of the correct solution. We also performed an exploratory user study, which confirms our intuition that partially-valid results are useful: the study shows that programmers use the output of the synthesizer for comprehension and often incorporate it into their solutions.
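
    To make the abstract's central idea concrete, here is a minimal Python sketch of best-effort enumeration under stated assumptions: candidate programs arrive in order of increasing size, and a result's rank is simply the fraction of input-output examples it satisfies. This is an illustration of the paradigm, not BESTER's implementation; all names are hypothetical.

    ```python
    # Sketch of best-effort enumeration: enumerate programs as usual, but
    # instead of discarding candidates that fail some examples, keep the
    # best partially-valid ones seen so far and return them ranked.
    import heapq
    import itertools
    from typing import Callable, Iterable, List, Tuple

    Example = Tuple[tuple, object]  # (argument tuple, expected output)

    def best_effort_enumerate(candidates: Iterable[Callable],
                              examples: List[Example],
                              top_k: int = 3,
                              budget: int = 100_000) -> List[Tuple[float, Callable]]:
        """Return up to top_k programs ranked by the fraction of examples
        they satisfy; stop early if a fully-valid program is found."""
        best: List[Tuple[float, int, Callable]] = []  # min-heap: (score, id, program)
        for i, prog in enumerate(itertools.islice(candidates, budget)):
            passed = 0
            for args, expected in examples:
                try:
                    if prog(*args) == expected:
                        passed += 1
                except Exception:
                    pass  # a crashing candidate merely fails that example
            score = passed / len(examples)
            heapq.heappush(best, (score, i, prog))
            if len(best) > top_k:
                heapq.heappop(best)  # evict the current worst result
            if score == 1.0:
                break  # an exactly-valid program ends the search
        return [(s, p) for s, _, p in sorted(best, reverse=True)]
    ```

    In this sketch the accumulation step costs only a heap update per candidate, which is consistent with the abstract's claim that partial results can be collected with minimal overhead.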

    AmiGo: Computational Design of Amigurumi Crochet Patterns

    Full text link
    We propose an approach for generating crochet instructions (patterns) from an input 3D model. We focus on Amigurumi, which are knitted stuffed toys. Given a closed triangle mesh, and a single point specified by the user, we generate crochet instructions, which when knitted and stuffed result in a toy similar to the input geometry. Our approach relies on constructing the geometry and connectivity of a Crochet Graph, which is then translated into a crochet pattern. We segment the shape automatically into crochetable components, which are connected using the join-as-you-go method, requiring no additional sewing. We demonstrate that our method is applicable to a large variety of shapes and geometries, and yields easily crochetable patterns.
    Comment: 11 pages, 10 figures, SCF 202
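
    As a toy illustration of only the final translation step (not the AmiGo algorithm itself), the sketch below turns a sequence of per-row circumferences, which a Crochet-Graph-style pipeline might produce, into row-by-row stitch instructions; the stitch width and all names are assumptions.

    ```python
    # Toy sketch: convert per-row circumferences (mm) into row-by-row
    # crochet instructions, inserting increases/decreases so each row's
    # stitch count matches its circumference.
    STITCH_WIDTH_MM = 7.0  # assumed single-crochet width; depends on yarn/hook

    def rows_to_pattern(circumferences_mm):
        counts = [max(3, round(c / STITCH_WIDTH_MM)) for c in circumferences_mm]
        pattern = [f"Row 1: {counts[0]} sc in a magic ring"]
        for r in range(1, len(counts)):
            delta = counts[r] - counts[r - 1]
            if delta > 0:
                pattern.append(f"Row {r + 1}: {delta} inc evenly around ({counts[r]})")
            elif delta < 0:
                pattern.append(f"Row {r + 1}: {-delta} dec evenly around ({counts[r]})")
            else:
                pattern.append(f"Row {r + 1}: sc around ({counts[r]})")
        return pattern

    # A sphere-like toy: circumferences grow toward the equator, then shrink.
    print("\n".join(rows_to_pattern([40, 80, 110, 120, 110, 80, 40])))
    ```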

    The Three Pillars of Machine Programming

    Get PDF
    In this position paper, we describe our vision of the future of machine programming through a categorical examination of three pillars of research. Those pillars are: (i) intention, (ii) invention, and (iii) adaptation. Intention emphasizes advancements in the human-to-computer and computer-to-machine-learning interfaces. Invention emphasizes the creation or refinement of algorithms or core hardware and software building blocks through machine learning (ML). Adaptation emphasizes advances in the use of ML-based constructs to autonomously evolve software.

    Learn&Fuzz: Machine Learning for Input Fuzzing

    No full text
    Fuzzing consists of repeatedly testing an application with modified, or fuzzed, inputs with the goal of finding security vulnerabilities in input-parsing code. In this paper, we show how to automate the generation of an input grammar suitable for input fuzzing using sample inputs and neural-network-based statistical machine-learning techniques. We present a detailed case study with a complex input format, namely PDF, and a large complex security-critical parser for this format, namely, the PDF parser embedded in Microsoft's new Edge browser. We discuss (and measure) the tension between conflicting learning and fuzzing goals: learning wants to capture the structure of well-formed inputs, while fuzzing wants to break that structure in order to cover unexpected code paths and find bugs. We also present a new algorithm for this learn&fuzz challenge which uses a learnt input probability distribution to intelligently guide where to fuzz inputs.
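
    The tension the abstract describes, generating well-formed inputs while still breaking structure where it matters, can be sketched as follows. This is inspired by the paper's learn-then-fuzz idea rather than a reproduction of its algorithm; `model` is an assumed black box returning a next-character distribution, and all names are hypothetical.

    ```python
    # Sketch of distribution-guided fuzzing: walk a learnt character-level
    # model, usually sampling a likely next character (to stay well-formed),
    # but occasionally emitting the *least* likely one to break structure
    # and reach unexpected parser code paths.
    import random

    def sample_fuzz(model, max_len=512, p_fuzz=0.1, seed_prefix="%PDF"):
        """model(prefix) is assumed to return a dict mapping each candidate
        next character to its probability under the learnt distribution."""
        out = list(seed_prefix)
        for _ in range(max_len):
            dist = model("".join(out))
            if not dist:
                break  # the model signals end-of-input
            if random.random() < p_fuzz:
                ch = min(dist, key=dist.get)  # fuzz: pick the least likely char
            else:
                chars, probs = zip(*dist.items())
                ch = random.choices(chars, weights=probs)[0]  # stay well-formed
            out.append(ch)
        return "".join(out)
    ```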

    Symbolic Automata for Static Specification Mining

    No full text
    We present a formal framework for static specification mining. The main idea is to represent partial temporal specifications as symbolic automata – automata where transitions may be labeled by variables, and a variable can be substituted by a letter, a word, or a regular language. Using symbolic automata, we construct an abstract domain for static specification mining, capturing both the partialness of a specification and the precision of a specification. We show interesting relationships between lattice operations of this domain and common operators for manipulating partial temporal specifications, such as building a more informative specification by consolidating two partial specifications.
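
    A minimal sketch of the central data structure, under stated assumptions: a transition carries either a concrete letter or a variable standing for unknown behaviour, and substituting a variable by a word makes a partial specification more informative. The names, and the simplification of supporting only word substitution rather than full regular languages, are illustrative choices, not the paper's formalism.

    ```python
    # Sketch of a symbolic automaton: a transition label is either a
    # concrete letter or a variable; substitute() replaces a variable
    # transition by a chain of concrete transitions spelling a word.
    from dataclasses import dataclass, field

    @dataclass(frozen=True)
    class Label:
        symbol: str
        is_var: bool = False  # True: a placeholder for unknown behaviour

    @dataclass
    class SymbolicAutomaton:
        transitions: dict = field(default_factory=dict)  # (state, Label) -> state
        initial: int = 0
        accepting: frozenset = frozenset()

        def substitute(self, var: str, word: str) -> "SymbolicAutomaton":
            """Return a new automaton with every transition labeled by the
            variable `var` replaced by concrete transitions spelling `word`
            (assumed non-empty)."""
            assert word, "substitution by a non-empty word is assumed"
            new = dict(self.transitions)
            fresh = max([s for s, _ in new] + list(new.values()), default=0) + 1
            for (src, lab), dst in list(new.items()):
                if lab.is_var and lab.symbol == var:
                    del new[(src, lab)]
                    cur = src
                    for ch in word[:-1]:  # intermediate letters get fresh states
                        new[(cur, Label(ch))] = fresh
                        cur, fresh = fresh, fresh + 1
                    new[(cur, Label(word[-1]))] = dst  # last letter rejoins dst
            return SymbolicAutomaton(new, self.initial, self.accepting)
    ```

    For example, an automaton with a single variable transition from state 0 to state 1 can be refined by substitute("x", "ab"), which routes the edge through a fresh intermediate state on concrete letters a and b.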

    Discovery of functional toxin/antitoxin systems in bacteria by shotgun cloning

    No full text
    Toxin-antitoxin (TA) modules, composed of a toxic protein and a counteracting antitoxin, play important roles in bacterial physiology. We examined the experimental insertion of 1.5 million genes from 388 microbial genomes into an Escherichia coli host using over 8.5 million random clones. This revealed hundreds of genes (toxins) that could only be cloned when the neighboring gene (antitoxin) was present on the same clone. Clustering of these genes revealed TA families widespread in bacterial genomes, some of which deviate from the classical characteristics previously described for such modules. Introduction of these genes into E. coli validated that the toxin's toxicity is mitigated by the antitoxin. Infection experiments with T7 phage showed that two of the new modules can provide resistance against phage. Moreover, our experiments revealed an 'anti-defense' protein in phage T7 that neutralizes phage resistance. Our results expose active fronts in the arms race between bacteria and phage.

    Histologic and Radiographic Characteristics of Bone Filler Under Bisphosphonates

    No full text
    BACKGROUND: Dental implants and bone augmentation are well-established procedures used for oral rehabilitation. There is increasing interest in biological mediators applied topically to prevent bone resorption and possibly to enhance osseointegration of dental implants. The purpose of this manuscript is to give a preliminary description of the effect of bisphosphonates on the ossification pattern of bone grafts in a rat model. MATERIAL AND METHODS: Twenty Wistar-derived male rats were divided into 2 groups: study and control. Bone substitute was added to mandibular defects and was covered by a resorbable collagen membrane. In the study group, the membrane was soaked with a bisphosphonate suspension; in the control group, it was soaked with saline solution. Radiographic and histomorphometric evaluations were performed. RESULTS: Radiographically, bone density was significantly higher in the study group. Histomorphometric analysis revealed a trend of higher bone volume fraction along with reduced bone substitute volume fraction in the study group, and an increased number of osteoclasts and blood vessels in the control group. CONCLUSIONS: Within the limitations of our study, application of bisphosphonates showed a trend toward increased bone quantity and radiographic bone density.

    Symbolic automata for representing big code

    No full text